An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units
Authors
Abstract
Similar Resources
Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent
In this paper we propose a novel approach to automatically determine the batch size in stochastic gradient descent methods. The choice of batch size induces a trade-off between the accuracy of the gradient estimate and the per-update cost in samples. We propose to determine the batch size by optimizing the ratio between a lower bound to a linear or quadratic Taylor approximatio...
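The snippet does not spell out the bound being optimized; as a rough illustration of the general idea, the sketch below picks, from a pilot batch of per-sample gradients, the batch size that maximizes a crude expected-improvement-per-sample score. The quadratic improvement model and the candidate sizes are assumptions of this sketch, not the criterion from the paper.

```python
import numpy as np

def choose_batch_size(grads, lr, candidate_sizes=(8, 16, 32, 64, 128, 256)):
    """Pick the batch size that maximizes expected loss decrease per sample.

    grads: (n, d) array of per-sample gradients from a pilot batch.
    Under a simple quadratic model, a step with batch size b decreases the
    loss by roughly lr * (||g||^2 - tr(Sigma)/b), where g is the mean
    gradient and Sigma the per-sample gradient covariance; dividing by b
    gives an improvement-per-sample score (an assumption of this sketch).
    """
    g = grads.mean(axis=0)                  # mean gradient estimate
    var = grads.var(axis=0).sum()           # total per-sample gradient variance
    score = {b: lr * (g @ g - var / b) / b for b in candidate_sizes}
    return max(score, key=score.get)

# Toy usage: per-sample gradients of a linear least-squares model.
rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(512, 10)), rng.normal(size=512), np.zeros(10)
per_sample_grads = (X @ w - y)[:, None] * X    # one gradient row per sample
print(choose_batch_size(per_sample_grads, lr=0.1))
```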
Handwritten Character Recognition using Modified Gradient Descent Technique of Neural Networks and Representation of Conjugate Descent for Training Patterns
The purpose of this study is to analyze the performance of the backpropagation algorithm with changing training patterns and a second momentum term in feed-forward neural networks. The analysis is conducted on 250 different three-lowercase-letter words from the English alphabet. These words are presented to two vertical segmentation programs designed in MATLAB and based on portions (1...
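The modification mentioned here is a second momentum term in the weight update; a minimal sketch of such an update rule (the coefficients, step count, and toy objective are illustrative assumptions, not the study's settings) might be:

```python
import numpy as np

def gd_two_momentum(w, grad_fn, lr=0.05, alpha=0.5, beta=0.2, steps=200):
    """Gradient descent with first and second momentum terms:
        dw(t) = -lr * grad + alpha * dw(t-1) + beta * dw(t-2)
    alpha weights the previous update and beta the one before it
    (the 'second momentum term'); the values here are illustrative.
    """
    prev, prev2 = np.zeros_like(w), np.zeros_like(w)
    for _ in range(steps):
        dw = -lr * grad_fn(w) + alpha * prev + beta * prev2
        w = w + dw
        prev2, prev = prev, dw
    return w

# Toy usage: minimize (w - 3)^2, whose gradient is 2 * (w - 3).
print(gd_two_momentum(np.array([10.0]), lambda w: 2.0 * (w - 3.0)))  # ~3.0
```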
Gradient descent GAN optimization is locally stable
An increasingly popular class of generative models, models that “understand” ...
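The result concerns simultaneous gradient updates of the two players; the sketch below only illustrates that update scheme on an assumed scalar toy objective with a stable equilibrium at (0, 0), not the GAN objective or the analysis from the paper.

```python
# Toy two-player objective V(d, g) = d*g - 0.5*d**2 (an assumption made for
# illustration): the discriminator d ascends V, the generator g descends it,
# and (0, 0) is the equilibrium.
def grad_d(d, g):
    return g - d          # dV/dd

def grad_g(d, g):
    return d              # dV/dg

d, g, lr = 1.0, 1.0, 0.05
for _ in range(2000):
    # simultaneous (not alternating) gradient steps for both players
    d, g = d + lr * grad_d(d, g), g - lr * grad_g(d, g)
print(d, g)               # both spiral in toward the equilibrium (0, 0)
```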
Block Transform Adaptation by Stochastic Gradient Descent
The problem of computing the eigendecomposition of an N × N symmetric matrix is cast as an unconstrained minimization of either of two performance measures. The K = N(N-1)/2 independent parameters represent angles of distinct Givens rotations. Gradient descent is applied to the minimization problem, step size bounds for local convergence are given, and similarities to LMS adaptive filtering are n...
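As a rough sketch of that setup, the code below parameterizes an orthogonal matrix by N(N-1)/2 Givens angles and runs plain gradient descent on the off-diagonal energy of the rotated matrix; the specific cost function, the finite-difference gradient, and the step size are assumptions of the sketch rather than the paper's choices.

```python
import numpy as np
from itertools import combinations

def givens_product(thetas, n):
    """Orthogonal U built as a product of N(N-1)/2 Givens rotations,
    one angle per coordinate pair (i, j)."""
    U = np.eye(n)
    for theta, (i, j) in zip(thetas, combinations(range(n), 2)):
        G = np.eye(n)
        c, s = np.cos(theta), np.sin(theta)
        G[i, i], G[j, j], G[i, j], G[j, i] = c, c, -s, s
        U = U @ G
    return U

def offdiag_energy(thetas, A):
    """Squared off-diagonal energy of U^T A U, one natural performance
    measure for diagonalization (the exact measure is an assumption)."""
    U = givens_product(thetas, A.shape[0])
    B = U.T @ A @ U
    return np.sum(B ** 2) - np.sum(np.diag(B) ** 2)

# Plain gradient descent on the angles, with a finite-difference gradient
# standing in for an analytic one.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                        # random symmetric test matrix
thetas = np.zeros(4 * 3 // 2)
for _ in range(500):
    grad = np.array([(offdiag_energy(thetas + 1e-5 * e, A)
                      - offdiag_energy(thetas - 1e-5 * e, A)) / 2e-5
                     for e in np.eye(len(thetas))])
    thetas -= 0.02 * grad
# The off-diagonal energy after descent is typically far below its start.
print(offdiag_energy(np.zeros_like(thetas), A), "->", offdiag_energy(thetas, A))
```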
Local Gain Adaptation in Stochastic Gradient Descent
Gain adaptation algorithms for neural networks typically adjust learning rates by monitoring the correlation between successive gradients. Here we discuss the limitations of this approach, and develop an alternative by extending Sutton’s work on linear systems to the general, nonlinear case. The resulting online algorithms are computationally little more expensive than other acceleration techni...
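The "typical" scheme described in the first sentence, growing or shrinking a per-parameter gain depending on whether successive gradient components agree in sign, can be sketched as follows; the multiplicative factors, initial gain, and toy problem are assumptions of the sketch, and the paper's own algorithm (an extension of Sutton's) differs.

```python
import numpy as np

def sgd_local_gains(w, grad_fn, steps=500, gain0=1e-3, up=1.05, down=0.7):
    """Gradient descent with one adaptive gain (learning rate) per parameter.

    A gain grows when the current and previous gradient components agree in
    sign (persistent correlation suggests steps are too small) and shrinks
    when they disagree (oscillation suggests steps are too large).
    """
    gains = np.full_like(w, gain0)
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        agree = np.sign(g) == np.sign(prev_grad)
        gains = np.where(agree, gains * up, gains * down)
        w = w - gains * g
        prev_grad = g
    return w

# Toy usage: a badly scaled quadratic, where per-parameter gains pay off.
def grad(w):
    return np.array([2.0, 200.0]) * w       # gradient of w1^2 + 100*w2^2

print(sgd_local_gains(np.array([5.0, 5.0]), grad))   # approaches (0, 0)
```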
Journal
Journal title: IEEE Transactions on Neural Networks and Learning Systems
Year: 2017
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/tnnls.2016.2572310